Two weeks ago there was a SONORUS meeting and school in Naples and Sorrento, Italy. The topic of the school was 'Computational Soundscape Analysis'. The next meeting will be in October in Gothenburg, Sweden.
Articles tagged with acoustics
Paper at Euronoise
The timetable for Euronoise 2015 is available. I will present the paper Determining an Empirical Emission Model for the Auralization of Jet Aircraft on Tuesday the second at nine in the morning.
Update
Seems like my presentation is moved to 10:20.
Slides on Zenodo
As posted earlier I've been uploading to Zenodo almost all slides, posters and proceedings that I made during my project. This includes the slides (as well as two auralizations) of the presentation I gave at the 168th ASA meeting in Indianapolis last year.
Auralization: Doppler effect
The goal of the project that I'm working on is to develop a tool to synthesize how it sounds when an aircraft flies over an urban environment. One part of the project deals with the propagation model, which also includes the Doppler shift. Let's have a look now at how we can include the Doppler shift in auralizations.
Doppler shift equation¶
The following equation is likely known to you. $$ f = \frac{c + v_r}{c + v_s} f_0 $$ You put in the emission frequency $f_0$, the speed of sound $c$, and the velocities of the receiver and the source, $v_r$ and $v_s$, respectively (with the convention that $v_r$ is positive when the receiver moves towards the source, and $v_s$ is positive when the source moves away from the receiver). What you get out is the Doppler-shifted frequency.
In the time-domain the Doppler shift is implicitly included when considering the propagation delay between source and receiver. It takes $d = s/c$ seconds to travel the source-receiver distance $s$ at the speed of sound $c$. When the distance changes over time, the emission is Doppler shifted.
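As a quick numeric illustration of my own (not part of the auralization tool), the closed-form equation gives the expected shift for the numbers used later in this post: a 5 kHz tone and a source moving at 70 m/s past a stationary receiver.

```python
c = 343.0    # speed of sound in m/s (same value as used below)
f0 = 5000.0  # emission frequency in Hz
v = 70.0     # source speed in m/s

# Receiver at rest, so v_r = 0. With the sign convention above,
# an approaching source has a negative v_s and a receding source
# a positive v_s.
f_approach = (c + 0.0) / (c - v) * f0  # source approaching
f_recede = (c + 0.0) / (c + v) * f0    # source receding

print(round(f_approach, 1))  # 6282.1
print(round(f_recede, 1))    # 4152.5
```

The tone is thus heard well above 5 kHz on approach and well below it after the flyover, which is exactly the glide we will see in the spectrograms below.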
Applying the Doppler shift to a tone¶
As an example we will now apply the Doppler shift to a pure tone. Let's start off with some imports.
In [1]:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from acoustics import Signal
from IPython.display import Audio
import warnings
warnings.filterwarnings('ignore')
As you can see I'm importing the `Signal` class from the `acoustics` package. This package is mainly a collection of tools that could be useful to acousticians. I started this package because I need many of these tools for my research, and I'm sure others can use them as well. The code base is still a bit messy though, and things do break occasionally.

We start with generating a pure tone.
In [2]:
duration = 10.0
fs = 44100.0
frequency = 5000.0
samples = int(fs*duration)
times = np.arange(samples) / fs
signal = Signal(np.sin(2.0*np.pi*frequency*times), fs=fs)
As I wrote before, the `Signal` class can be found in the `acoustics` package. This class is a subclass of `np.ndarray` and requires, besides the array, also a sample frequency. The class has some methods for common operations, like calculating different types of spectra and plotting them, as well as calculating other acoustic quantities like the equivalent sound pressure level.

In [3]:
Audio(signal, rate=signal.fs)

Out[3]:

In [4]:
fig = signal.spectrogram(ylim=(0.0, 10000.0))
In order to create a Doppler shift we need a moving source or receiver. In this case, let's consider a moving source and a non-moving receiver. The source will move straight over the receiver.
In [5]:
velocity = 70.0
x = times * velocity
x -= x.max()/2.0
y = np.zeros(samples)
z = 100.0 * np.ones(samples)
position_source = np.vstack((x,y,z)).T
position_receiver = np.zeros(3)
In [6]:
distance = np.linalg.norm((position_source - position_receiver), axis=-1)
The exact speed of sound in the atmosphere depends on the conditions of the atmosphere, like temperature, pressure and humidity. Furthermore, the speed of sound can vary spatially and temporally due to e.g. wind, temperature gradients, or turbulence. Let's assume for now a constant speed of sound of 343.0 m/s.
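As an aside (my own illustration, not something the auralization needs), the usual ideal-gas approximation shows how the speed of sound in dry air depends on temperature, and why 343 m/s is a reasonable round number for roughly 20 °C:

```python
import math

def speed_of_sound(temperature):
    """Approximate speed of sound in dry air in m/s,
    given the temperature in degrees Celsius
    (ideal-gas approximation)."""
    return 331.3 * math.sqrt(1.0 + temperature / 273.15)

print(speed_of_sound(0.0))   # 331.3 m/s at the freezing point
print(speed_of_sound(20.0))  # roughly 343 m/s
```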
In [7]:
soundspeed = 343.0
The propagation delay from source to receiver is given simply by
In [8]:
delay = distance / soundspeed
Let's have a look at the delay as a function of time.
In [9]:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(times, delay)
ax.set_xlabel('$t$ in s')
ax.set_ylabel('delay in s')
Out[9]:<matplotlib.text.Text at 0x7f77211c85f8>
Now that we've calculated the delay we need to apply this delay to our signal of interest, which is a matter of shifting the samples. However, since we have a discrete signal, and the delays are not integer multiples of the sample time, an interpolation scheme is needed.
Linear interpolation¶
The easiest and likely fastest method is a linear interpolation.
In [10]:
def interpolation_linear(signal, times, fs):
    """Linear interpolation of `signal` at `times`.

    :param signal: Signal.
    :param times: Times to sample at.
    :param fs: Sample frequency.

    """
    k_r = np.arange(0, len(signal), 1)  # Create vector of indices
    k = k_r - times * fs                # Create vector of warped indices
    kf = np.floor(k).astype(int)        # Floor the warped indices. Convert to integers so we can use them as indices.
    R = ( (1.0-k+kf) * signal[kf] + (k-kf) * signal[kf+1] ) * (kf >= 0)  #+ 0.0 * (kf<0)
    return R
Let's apply the delay and listen.
In [11]:
delayed_linear = Signal(interpolation_linear(signal, delay, signal.fs), signal.fs)
In [12]:
Audio(delayed_linear, rate=delayed_linear.fs)
Out[12]:

You can hear the Doppler shift clearly, but... there are artifacts! The artifacts are also clearly visible in the spectrogram shown below. The red line shows the Doppler-shifted frequency, and the other visible lines are the artifacts. The artifacts are at least 30 dB weaker than the tone of interest, but we can still clearly hear them since there is no other signal besides the tone.
In [13]:
fig = delayed_linear.spectrogram(ylim=(0.0, 10000.0))
In most auralizations I create, these artifacts are masked by other sounds. Still, there are ways to reduce them. One way would be to upsample the signal before applying this resampling. Another way would be to use a different interpolation scheme.
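A possible sketch of the first option, upsampling before the linear-interpolation delay. This is my own illustration, not part of the auralization tool: it uses `scipy.signal.resample_poly`, and the upsampling factor of 4 is arbitrary.

```python
import numpy as np
from scipy.signal import resample_poly

def interpolation_linear_upsampled(signal, times, fs, factor=4):
    """Apply a time-varying delay `times` (in seconds) to `signal`
    by linear interpolation on a `factor`-times upsampled copy,
    which reduces the interpolation artifacts."""
    upsampled = resample_poly(signal, factor, 1)  # polyphase upsampling
    fs_up = fs * factor
    # Warped (fractional) indices in the upsampled signal.
    k = np.arange(len(signal)) * factor - times * fs_up
    kf = np.floor(k).astype(int)
    frac = k - kf
    valid = (kf >= 0) & (kf + 1 < len(upsampled))
    kf = np.clip(kf, 0, len(upsampled) - 2)  # keep indexing in bounds
    return ((1.0 - frac) * upsampled[kf] + frac * upsampled[kf + 1]) * valid
```

With a zero delay this reduces to resampling the signal onto its own grid, so apart from small filter ripple near the edges the input is returned unchanged, which is a quick way to sanity-check the indexing.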
Lanczos interpolation¶
Theoretically, the optimal reconstruction filter is a `sinc` filter. In practice only approximations of this filter can be used, and these approximations are generally obtained by windowing and truncating the `sinc` function. One of these approximations is the Lanczos filter or kernel, which is the `sinc` function windowed by another `sinc` function. For more information on Lanczos interpolation I can recommend the excellent Wikipedia article on this topic.

The following code shows an implementation of Lanczos interpolation in one dimension. In pure Python this would be too slow, so I'm using the Numba just-in-time compiler here.
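For reference, the Lanczos kernel of size $a$ is the windowed (normalized) `sinc` function $$ L(x) = \begin{cases} \operatorname{sinc}(x)\,\operatorname{sinc}(x/a) & \text{if } -a < x < a \\ 0 & \text{otherwise} \end{cases} $$ where $\operatorname{sinc}(x) = \sin(\pi x)/(\pi x)$. Resampling at a fractional position $x$ then amounts to the finite sum $s(x) = \sum_i s_i \, L(x - i)$ over the roughly $2a$ samples nearest to $x$, which is exactly what the inner loop below computes.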
In [14]:
import numba
import math

@numba.jit(nogil=True)
def sinc(x):
    if x == 0:
        return 1.0
    else:
        return math.sin(x*math.pi) / (x*math.pi)

@numba.jit(nogil=True)
def _lanczos_window(x, a):
    if -a < x and x < a:
        return sinc(x) * sinc(x/a)
    else:
        return 0.0

@numba.jit(nogil=True)
def _lanczos_resample(signal, samples, output, a):
    """Sample signal at float samples."""
    for index, x in enumerate(samples):
        if x >= 0.0 and x < len(signal):
            for i in range(math.floor(x)-a+1, math.floor(x+a)):
                if i >= 0 and i < len(signal):
                    output[index] += signal[i] * _lanczos_window(x-i, a)
    return output

def interpolation_lanczos(signal, times, fs, a=10):
    """Lanczos interpolation of `signal` at `times`.

    :param signal: Signal.
    :param times: Times to sample at.
    :param fs: Sample frequency.
    :param a: Size of Lanczos kernel :math:`a`.

    """
    samples = -times * fs + np.arange(len(signal))
    return _lanczos_resample(signal, samples, np.zeros_like(signal), a)
Let's apply the delay again, but now using this filter. Note that I use the default kernel size, which is quite arbitrary.
In [15]:
delayed_lanczos = Signal(interpolation_lanczos(signal, delay, signal.fs), signal.fs)
In [16]:
Audio(delayed_lanczos, rate=delayed_lanczos.fs)
Out[16]:

The artifacts that were present with linear interpolation are mostly gone now. However, when the change in delay is large, other artifacts are audible.
In [17]:
fig = delayed_lanczos.spectrogram(ylim=(0.0, 10000.0))
Conclusions¶
The Doppler effect is an interesting physical phenomenon, one that we hear perhaps daily. I've shown how you can synthesize the Doppler shift and considered two interpolation schemes. Which method is preferable depends on your use case. For higher frequencies Lanczos interpolation results in fewer artifacts than linear interpolation. However, computationally Lanczos interpolation is much more demanding:
In [18]:
%timeit interpolation_linear(signal, delay, signal.fs)
%timeit interpolation_lanczos(signal, delay, signal.fs)
10 loops, best of 3: 26.7 ms per loop
1 loops, best of 3: 597 ms per loop
Research visit to Chalmers
After the ASA meeting in Indianapolis I went to Gothenburg again for a research visit. As I wrote in my previous post, it was strange to be there again. Last spring I visited Gothenburg for just a couple of days, but staying for seven weeks this time was quite different. You start noticing that quite a lot of things have changed. Many people I knew there have moved away, and I noticed a couple of restaurants I used to visit have closed since. Despite those things it was nice meeting old friends again.
So, as I wrote in the title, I went there on a research visit. The main reason for my visit was to attend some courses at Chalmers, but also to work on my research. Initially the idea was to further investigate the model we're developing to deal with atmospheric turbulence in aircraft auralisations. In the end, however, the courses took more time than I expected, so I didn't really get to work on the turbulence model anymore. I did manage to implement one nice part of the auralisation tool, namely support for Ambisonics.
Besides Chalmers I also visited SP in Borås. SP was a familiar place for me as well, since I did an internship there in 2010. Earlier I had made recordings with a SoundField microphone; at Empa, however, we do not have an Ambisonics setup. SP has a listening room with a third-order Ambisonics system, giving me the possibility to listen to the first-order recordings as well as to higher-order auralisations.
During one visit I met up with Peter Lundén, who installed the system. After some problem solving we managed to get both the recordings and the auralisations working. Unfortunately there was one problem with the auralisations: halfway through, the aircraft somehow makes a U-turn. I haven't checked it any further yet, but I think it is due to the inclusion of the Condon-Shortley phase in the function I used to calculate the spherical harmonics.
Forum Acusticum 2014, Krakow
Last week the conference Forum Acusticum 2014 took place in the city of Krakow, Poland. From Monday till Friday there were presentations as well as meetings of technical committees. The presentations covered a wide range of topics like structural acoustics, human response and auralisation.
A special session was organised on Urban Sound Planning as well, in which many of my project members presented their current work. I'm glad a lot of people attended the session.
I gave a presentation with the title Modelling sound propagation in the presence of turbulence for the auralisation of aircraft noise in a session on Environmental Acoustics. This session was early on Friday morning, which was a bit unfortunate, considering there was a nice party with the Young Acousticians Network the evening before. I was surprised there were still so many people that morning!
On Thursday I joined a discussion on benchmarking for computational acoustics. When implementing a model you would like to be able to validate it. This can sometimes be difficult due to a lack of available information or benchmarks. Therefore, the idea is now to put online some kind of repository where people can share their benchmarks. Considering this is a topic of interest in other fields of science as well, I am really curious how they deal with it.
The city of Krakow surprised me a lot. It's quite a green city and there seems to be a good and relaxed mood. There's plenty of pubs and restaurants, especially around the main square which is huge.
Anyway, that's my summary on FA2014!
Back in Gothenburg for SONORUS
Last month I was back in Gothenburg for a week and it was nice! I've been living in Gothenburg for close to two years and being back there brought up some good memories. The reason I was there again was that we had a course for SONORUS on leadership.
In the course we treated topics like communication, knowledge, coaching, group dynamics and decision making. There was one very interesting exercise on group dynamics. In 20 minutes the entire group had to agree on a list of statements related to meetings. We also built a tower using dry spaghetti, a meter of tape and a marshmallow, or at least we tried!
Besides the course we spent time on some practical matters in the project, like discussing how we would cooperate, e.g. with regards to sharing data. Another topic that was discussed is the upcoming International Noise Awareness Day, but more about that later!
Planet Acoustics
Recently I set up a new Planet aimed at acousticians called Planet Acoustics. A Planet is basically a feed reader; it downloads news feeds published by web sites and aggregates their content together into a single combined feed. The purpose of this Planet is to collect blogs and news by and for acousticians.
Planets are very popular in the open-source software development world. Planets like Planet KDE, Planet Debian, Planet Python and Planet SciPy syndicate posts from many people all over the world who write about, respectively, KDE, the Debian project, the Python programming language and the Python module SciPy.
The posts on such a Planet give a good overview of which efforts are currently taking place. Contrary to news posts, press releases or journal articles they are informal. People write about their current work, problems they face, events they attend and often share nice tips and tricks or even detailed tutorials.
I hope that Planet Acoustics will become a Planet with many nice posts regarding acoustics and the life of acousticians. If you would like to have your posts syndicated, then please send an e-mail to admin@planetacoustics.org with your name and a link to your Atom or RSS feed. Hopefully, in time, Planet Acoustics will become just as successful as those Planets I referred to :-)